In computer science, a sparse file is a type of computer file that attempts to use file system space more efficiently when blocks allocated to the file are mostly empty. This is achieved by writing brief information (metadata) representing the empty blocks to disk instead of the actual "empty" space which makes up the block, thereby using less disk space. Full blocks are written to disk only when they contain "real" (non-empty) data.
When reading sparse files, the file system transparently converts metadata representing empty blocks into "real" blocks filled with zero bytes at runtime. The application is unaware of this conversion.
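The effect can be demonstrated programmatically. For illustration, the following Python sketch (file names are arbitrary) seeks past the start of a new file before writing; on a file system that supports sparse files the skipped range is stored as a hole, and on any file system it reads back as zero bytes:
with open("file-with-hole", "wb") as f:
    f.seek(1024 * 1024)        # move 1 MiB forward without writing anything
    f.write(b"real data")      # only this write needs allocated blocks
with open("file-with-hole", "rb") as f:
    first = f.read(16)         # reading inside the gap yields zero bytes
assert first == b"\0" * 16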
Most modern file systems support sparse files, including most Unix variants and NTFS,[1] but notably not Apple's HFS+. Sparse files are commonly used for disk images, database snapshots, log files and in scientific applications.
The advantage of sparse files is that storage is only allocated when actually needed: disk space is saved, and large files can be created even if there is insufficient free space on the file system.
Disadvantages are that sparse files may become fragmented; file system free space reports may be misleading; filling up file systems containing sparse files can have unexpected effects (such as disk-full or quota-exceeded errors when merely overwriting an existing portion of a file that happened to be sparse); and copying a sparse file with a program that does not explicitly support sparse files may copy the entire, uncompressed size of the file, including the zero sections which are not allocated on disk, losing the benefits of the sparse property. Sparse files are also not fully supported by all backup software or applications.
Sparse files are typically handled transparently to the user, but the differences between a normal file and a sparse file become apparent in some situations.
The Unix command:
dd if=/dev/null of=sparse-file bs=1k seek=5120
will create a file of five mebibytes in size, but with no data stored on disk (only metadata). (GNU dd has this behavior because it calls ftruncate to set the file size; other implementations may merely create an empty file.)
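The same file can be produced directly from a program by setting the file length without writing any data; a minimal Python sketch of the idea (equivalent to the ftruncate call mentioned above, and occupying no data blocks on a file system with sparse file support):
with open("sparse-file", "wb") as f:
    f.truncate(5 * 1024 * 1024)   # set the length to 5 MiB; no data blocks are written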
Similarly the truncate command may be used, if available:
truncate -s 5M <filename>
The -s option of the ls command shows the occupied space in blocks, alongside the apparent size in bytes reported by -l; with -k, the block count is given in kibibytes:
ls -lks sparse-file
The -h option prints both sizes in human-readable format.
Alternatively, the du command prints the occupied space, while ls prints the apparent size. The du option --block-size=1 reports the occupied space in bytes rather than blocks, so that it can be compared with the ls output:
du --block-size=1 sparse-file
ls -l sparse-file
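The same two numbers can also be read from stat(): st_size is the apparent size in bytes, while st_blocks counts allocated blocks (conventionally 512 bytes each on Linux). For illustration, in Python:
import os

st = os.stat("sparse-file")
print("apparent size:", st.st_size)           # what ls -l reports
print("allocated size:", st.st_blocks * 512)  # what du reports, converted to bytes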
Normally, the GNU version of cp is good at detecting whether a file is sparse, so it suffices to run:
cp sparse-file new-file
and new-file will be sparse. However, GNU cp does have a --sparse=WHEN option.[2] This is especially useful if a sparse file has somehow become non-sparse (i.e. the empty blocks have been written out to disk in full). Disk space can be recovered by doing:
cp --sparse=always formerly-sparse-file recovered-sparse-file
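How a program finds out which parts of a file are holes varies: a tool may simply scan for long runs of zero bytes, or, on operating systems that provide them, use the lseek flags SEEK_DATA and SEEK_HOLE to query the file's actual layout. The following Python sketch (illustrative only; it assumes the flags are available, as on Linux, Solaris or FreeBSD) lists the data extents of a file:
import os

def data_ranges(path):
    # Yield (start, end) byte offsets of the non-hole extents of a file,
    # using lseek with SEEK_DATA and SEEK_HOLE.
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        offset = 0
        while offset < size:
            try:
                start = os.lseek(fd, offset, os.SEEK_DATA)
            except OSError:                    # ENXIO: no data beyond this offset
                break
            end = os.lseek(fd, start, os.SEEK_HOLE)
            yield start, end
            offset = end
    finally:
        os.close(fd)

for start, end in data_ranges("recovered-sparse-file"):
    print("data from byte", start, "to", end)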
Most cp implementations do not support the --sparse option and will always expand sparse files; FreeBSD's cp is one example. A partially viable alternative on those systems is to use rsync with its own --sparse option[3] instead of cp. Unfortunately, --sparse cannot be combined with --inplace, so rsync'ing huge files across the network will always be wasteful of either network bandwidth or disk bandwidth.
The command
cat somefile | cp --sparse=always /proc/self/fd/0 new-sparse-file
works around the --sparse and --inplace issue: the data is piped into cp, which reads it from its standard input and can write the destination as a sparse file.
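The idea behind --sparse=always and the pipe workaround can be illustrated with a short sketch (not GNU cp's actual implementation): read the input in fixed-size blocks and, for blocks consisting entirely of zero bytes, seek forward in the output instead of writing, then set the final length so that a trailing hole is preserved. In Python:
BLOCK = 64 * 1024

def copy_sparse(src_path, dst_path):
    # Copy src to dst, turning runs of all-zero blocks into holes in dst.
    zero = b"\0" * BLOCK
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            if block == zero[:len(block)]:
                dst.seek(len(block), 1)   # skip ahead: leaves a hole instead of zeros
            else:
                dst.write(block)
        dst.truncate()                    # fix the length if the file ends in a hole

copy_sparse("formerly-sparse-file", "recovered-sparse-file")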